
    Bayesian Incentive Compatibility via Fractional Assignments

    Very recently, Hartline and Lucier studied single-parameter mechanism design problems in the Bayesian setting. They proposed a black-box reduction that converts Bayesian approximation algorithms into Bayesian-incentive-compatible (BIC) mechanisms while preserving social welfare. It remains a major open question whether a similar reduction exists in the more important multi-parameter setting. In this paper, we give a positive answer to this question when the prior distribution has finite and small support. We propose a black-box reduction for designing BIC multi-parameter mechanisms. The reduction converts any algorithm into an ε-BIC mechanism with only marginal loss in social welfare. As a result, for combinatorial auctions with sub-additive agents we get an ε-BIC mechanism that achieves a constant-factor approximation.
    Comment: 22 pages, 1 figure
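
    At the heart of reductions in this line of work is a matching between sampled "replica" types and "surrogate" types of each agent. The sketch below shows the flavor of that step, under stated assumptions: a finite type support types and a hypothetical oracle interim_value(r, s) giving the expected value a replica of type r obtains when the algorithm is run on report s. It computes an integral maximum-weight matching with VCG-style payments, a simplification of the fractional assignment the paper actually uses.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def surrogate_matching(types, interim_value):
            # Weight matrix: rows are replicas, columns are surrogates;
            # entry (r, s) is the expected value of replica r reporting s.
            W = np.array([[interim_value(r, s) for s in types] for r in types])
            rows, cols = linear_sum_assignment(-W)   # max-weight matching
            total = W[rows, cols].sum()
            payments = []
            for i in range(len(types)):
                # VCG payment: the externality replica i imposes on the others.
                W_minus = np.delete(W, i, axis=0)
                r2, c2 = linear_sum_assignment(-W_minus)
                payments.append(W_minus[r2, c2].sum() - (total - W[i, cols[i]]))
            return list(zip(rows.tolist(), cols.tolist())), payments

        # Toy usage with a made-up interim-value oracle:
        print(surrogate_matching([0, 1, 2], lambda r, s: 1.0 / (1 + abs(r - s))))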

    Exploiting Metric Structure for Efficient Private Query Release

    We consider the problem of privately answering queries defined on databases that are collections of points in some metric space. We give simple, computationally efficient algorithms for answering distance queries defined over an arbitrary metric. Distance queries are specified by points in the metric space, and ask for the average distance from the query point to the points contained in the database, according to the specified metric. Our algorithms run efficiently in the database size and the dimension of the space, and operate both in the online query release setting and in the offline setting, in which they must generate, in polynomial time, a fixed data structure that can answer all queries of interest. This represents one of the first subclasses of linear queries for which efficient algorithms are known for the private query release problem, circumventing known hardness results for generic linear queries.
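
    For intuition, a single average-distance query can be answered with the standard Laplace mechanism: if the metric is bounded by diam, replacing one of the n database points moves the average by at most diam/n, so Laplace noise of scale diam/(n·epsilon) gives epsilon-differential privacy for that one query. A minimal sketch of this baseline follows; the paper's contribution is answering many such queries efficiently, which this per-query baseline does not capture.

        import numpy as np

        def private_avg_distance(db, query_point, metric, diam, epsilon, rng=None):
            # Sensitivity of the average-distance query: swapping one of the
            # n points changes the average by at most diam / n.
            rng = rng or np.random.default_rng()
            n = len(db)
            true_answer = np.mean([metric(query_point, p) for p in db])
            noise = rng.laplace(0.0, (diam / n) / epsilon)
            return true_answer + noise

        # Example: Euclidean metric on the unit square, so diam = sqrt(2).
        db = np.random.default_rng(0).random((1000, 2))
        euclid = lambda a, b: float(np.linalg.norm(a - b))
        print(private_avg_distance(db, np.array([0.5, 0.5]), euclid, np.sqrt(2), 0.1))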

    Making the Most of Your Samples

    We study the problem of setting a price for a potential buyer with a valuation drawn from an unknown distribution $D$. The seller has "data" about $D$ in the form of $m \ge 1$ i.i.d. samples, and the algorithmic challenge is to use these samples to obtain expected revenue as close as possible to what could be achieved with advance knowledge of $D$. Our first set of results quantifies the number of samples $m$ that are necessary and sufficient to obtain a $(1-\epsilon)$-approximation. For example, for an unknown distribution that satisfies the monotone hazard rate (MHR) condition, we prove that $\tilde{\Theta}(\epsilon^{-3/2})$ samples are necessary and sufficient. Remarkably, this is fewer samples than are necessary to accurately estimate the expected revenue obtained by even a single reserve price. We also prove essentially tight sample complexity bounds for regular distributions, bounded-support distributions, and a wide class of irregular distributions. Our lower bound approach borrows tools from differential privacy and information theory, and we believe it could find further applications in auction theory. Our second set of results considers the single-sample case. For regular distributions, we prove that no pricing strategy is better than $\tfrac{1}{2}$-approximate, and this is optimal by the Bulow-Klemperer theorem. For MHR distributions, we show how to do better: we give a simple pricing strategy that guarantees expected revenue at least $0.589$ times the maximum possible. We also prove that no pricing strategy achieves an approximation guarantee better than $\frac{e}{4} \approx 0.68$.
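
    The natural baseline in this setting is empirical revenue maximization: post the price that maximizes revenue on the $m$ samples themselves. A minimal sketch follows; the paper's tight bounds rest on more careful estimators (e.g. guarding against the empirical distribution's tail), so this illustrates the idea rather than the paper's algorithm.

        import numpy as np

        def erm_price(samples):
            # With samples sorted ascending, price s[i] is accepted by exactly
            # the n - i samples that are at least s[i], so the empirical
            # revenue of posting s[i] is s[i] * (n - i).
            s = np.sort(np.asarray(samples))
            n = len(s)
            revenues = s * (n - np.arange(n))
            return s[np.argmax(revenues)]

        # Exp(1) has a constant hazard rate, hence satisfies the MHR condition.
        samples = np.random.default_rng(0).exponential(1.0, size=1000)
        print(erm_price(samples))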

    Multi-disk subsystem organizations for very large databases

    This thesis investigates efficient mappings of very large databases with non-uniform access to their data onto a multi-disk subsystem. Two algorithms are developed to distribute the database across multiple disks, possibly with replication, in order to minimize latency and maximize throughput. The algorithms are compared with respect to the amount of replication overhead incurred to achieve a desired throughput. A simulator is developed to model the two mapping algorithms and investigate their efficiency.
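
    One plausible shape for such a mapping, as a hedged sketch of the general heat-balancing idea rather than a reconstruction of either of the thesis's two algorithms: place blocks hottest-first on the least-loaded disk, and replicate any block whose access rate alone exceeds a single disk's fair share of the total load.

        import heapq, math

        def place(blocks, num_disks):
            """blocks: list of (block_id, access_rate); returns {disk_id: [block_ids]}."""
            total = sum(rate for _, rate in blocks)
            fair_share = total / num_disks
            disks = [(0.0, d, []) for d in range(num_disks)]  # (load, disk id, contents)
            heapq.heapify(disks)
            for block_id, rate in sorted(blocks, key=lambda b: -b[1]):
                # Replicate hot blocks: enough copies that no single copy
                # carries more than one disk's fair share of the access load.
                copies = min(num_disks, max(1, math.ceil(rate / fair_share)))
                chosen = [heapq.heappop(disks) for _ in range(copies)]  # distinct disks
                for load, d, contents in chosen:
                    contents.append(block_id)
                    heapq.heappush(disks, (load + rate / copies, d, contents))
            return {d: contents for _, d, contents in sorted(disks, key=lambda x: x[1])}

        print(place([("a", 50), ("b", 30), ("c", 10), ("d", 10)], num_disks=3))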

    The Sample Complexity of Auctions with Side Information

    Traditionally, the Bayesian optimal auction design problem has been considered either when the bidder values are i.i.d., or when each bidder is individually identifiable via her value distribution. The latter is a reasonable approach when the bidders can be classified into a few categories, but there are many instances where the classification of bidders is a continuum. For example, the classification of the bidders may be based on their annual income, their propensity to buy an item based on past behavior, or, in the case of ad auctions, the click-through rate of their ads. We introduce an alternate model that captures this aspect, where bidders are a priori identical but can be distinguished based (only) on some side information the auctioneer obtains at the time of the auction. We extend the sample complexity approach of Dhangwatnotai et al. and Cole and Roughgarden to this model and obtain almost matching upper and lower bounds. As an aside, we obtain a revenue monotonicity lemma which may be of independent interest. We also show how to use Empirical Risk Minimization techniques to improve the sample complexity bound of Cole and Roughgarden for the non-identical but independent value distribution case.
    Comment: A version of this paper appeared in STOC 2016
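
    For context, the single-sample idea of Dhangwatnotai et al. that this line of work builds on is easy to state: post one fresh sample from the bidder's (regular) distribution as a take-it-or-leave-it price, which guarantees at least half of the optimal revenue. Below is a quick Monte-Carlo sanity check of that predecessor result, using Exp(1), which is regular with monopoly price 1.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200_000
        values = rng.exponential(1.0, n)    # bidder valuations, Exp(1)
        prices = rng.exponential(1.0, n)    # one independent sample posted as the price

        single_sample_rev = np.mean(np.where(values >= prices, prices, 0.0))
        opt_rev = 1.0 * np.exp(-1.0)        # monopoly price p = 1 maximizes p * e^{-p}
        print(single_sample_rev / opt_rev)  # at least 0.5 for any regular distribution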